
    Influence of Sulfur-Containing Diamino Acid Structure on Covalently Crosslinked Copolypeptide Hydrogels.

    Biologically occurring non-canonical di-α-amino acids were converted into new di-N-carboxyanhydride (di-NCA) monomers in reasonable yields with high purity. Five different di-NCAs were separately copolymerized with tert-butyl-l-glutamate NCA to obtain covalently crosslinked copolypeptides capable of forming hydrogels with varying crosslinker density. Comparison of hydrogel properties against residue structure revealed that different di-α-amino acids were not equivalent in crosslink formation. Notably, l-cystine was found to produce significantly weaker hydrogels than l-homocystine, l-cystathionine, and l-lanthionine, suggesting that l-cystine may be a sub-optimal choice of di-α-amino acid for the preparation of copolypeptide networks. The di-α-amino acid crosslinkers also differed in chemical stability: disulfide crosslinks were readily degraded by reduction, whereas thioether crosslinks were stable against reduction. This difference in response may provide a means to fine-tune the reduction sensitivity of polypeptide biomaterial networks.

    Disease activity and cognition in rheumatoid arthritis: an open label pilot study

    Acknowledgements: This work was supported in part by the NIHR Newcastle Biomedical Research Centre. Funding for this study was provided by Abbott Laboratories. Abbott Laboratories were not involved in study design; in the collection, analysis and interpretation of data; or in the writing of the report.

    Time-and-motion tool for the assessment of working time in tuberculosis laboratories: a multicentre study

    SETTING: Implementation of novel diagnostic assays in tuberculosis (TB) laboratory diagnosis requires effective management of time and resources. OBJECTIVE: To further develop and assess at multiple centres a time-and-motion (T&M) tool as an objective means for recording the actual time spent on running laboratory assays. DESIGN: Multicentre prospective study conducted in six European Union (EU) reference TB laboratories. RESULTS: A total of 1060 specimens were tested using four laboratory assays. The number of specimens per batch varied from one to 60; a total of 64 recordings were performed. Theoretical hands-on times per specimen (TTPS) in h:min:s for Xpert® MTB/RIF, mycobacterial interspersed repetitive unit-variable number of tandem repeats genotyping, Ziehl-Neelsen staining and manual fluorescence microscopy were respectively 00:33:02 ± 00:12:32, 00:13:34 ± 00:03:11, 00:09:54 ± 00:00:53 and 00:06:23 ± 00:01:36. Variations between laboratories were predominantly linked to the time spent on reporting and administrative procedures. Processing specimens in batches could help save time in highly automated assays (e.g., line-probe) (TTPS 00:14:00 vs. 00:09:45 for batches comprising 7 and 31 specimens, respectively). CONCLUSIONS: The T&M tool can be considered a universal and objective methodology contributing to workload assessment in TB diagnostic laboratories. Comparison of workload between laboratories could help laboratory managers justify their resource and personnel needs for the implementation of novel, time-saving, cost-effective technologies, as well as identify areas for improvement.
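
    As a rough illustration of the kind of aggregation behind these per-specimen figures (not the actual T&M tool, whose data format is not described here), the sketch below computes mean ± SD hands-on time per specimen from hypothetical batch recordings; the assay names and durations are placeholders.

        from statistics import mean, stdev
        from datetime import timedelta

        # Hypothetical batch recordings: (assay, specimens in batch, total hands-on time).
        recordings = [
            ("Xpert MTB/RIF", 4, timedelta(hours=2, minutes=12)),
            ("Xpert MTB/RIF", 8, timedelta(hours=4, minutes=20)),
            ("Ziehl-Neelsen staining", 12, timedelta(hours=2, minutes=0)),
            ("Ziehl-Neelsen staining", 20, timedelta(hours=3, minutes=15)),
        ]

        # Time per specimen for each recording, grouped by assay.
        per_specimen = {}
        for assay, batch_size, total in recordings:
            per_specimen.setdefault(assay, []).append(total.total_seconds() / batch_size)

        for assay, times in per_specimen.items():
            sd = stdev(times) if len(times) > 1 else 0.0
            print(f"{assay}: {timedelta(seconds=round(mean(times)))} "
                  f"± {timedelta(seconds=round(sd))} per specimen")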

    Tests of Bayesian Model Selection Techniques for Gravitational Wave Astronomy

    The analysis of gravitational wave data involves many model selection problems. The most important example is the detection problem of selecting between the data being consistent with instrument noise alone, or instrument noise and a gravitational wave signal. The analysis of data from ground-based gravitational wave detectors is mostly conducted using classical statistics, and methods such as the Neyman-Pearson criteria are used for model selection. Future space-based detectors, such as the Laser Interferometer Space Antenna (LISA), are expected to produce rich data streams containing the signals from many millions of sources. Determining the number of sources that are resolvable, and the most appropriate description of each source, poses a challenging model selection problem that may best be addressed in a Bayesian framework. An important class of LISA sources comprises the millions of low-mass binary systems within our own galaxy, tens of thousands of which will be detectable. Not only is the number of sources unknown, but so is the number of parameters required to model the waveforms. For example, a significant subset of the resolvable galactic binaries will exhibit orbital frequency evolution, while a smaller number will have measurable eccentricity. In the Bayesian approach to model selection one needs to compute the Bayes factor between competing models. Here we explore various methods for computing Bayes factors in the context of determining which galactic binaries have measurable frequency evolution. The methods explored include a Reversible Jump Markov Chain Monte Carlo (RJMCMC) algorithm, Savage-Dickey density ratios, the Schwarz-Bayes Information Criterion (BIC), and the Laplace approximation to the model evidence. We find good agreement between all of the approaches.
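
    To make the BIC route concrete, the sketch below approximates the log Bayes factor between a model with an extra frequency-derivative parameter and one without, using the Schwarz approximation ln B ≈ ΔlnL_max − ½·Δk·ln N. The likelihood values and data count are made-up placeholders, not results from the paper.

        import numpy as np

        def log_bayes_factor_bic(lnL_full, k_full, lnL_reduced, k_reduced, n_data):
            """Schwarz/BIC approximation to ln(Bayes factor) of the full model
            over the reduced (nested) model, with BIC = -2*lnL + k*ln(N)."""
            bic_full = -2.0 * lnL_full + k_full * np.log(n_data)
            bic_reduced = -2.0 * lnL_reduced + k_reduced * np.log(n_data)
            return 0.5 * (bic_reduced - bic_full)

        # Toy question: does adding a frequency derivative (9 vs. 8 parameters)
        # improve the fit enough to justify the extra parameter?
        ln_b = log_bayes_factor_bic(lnL_full=5021.3, k_full=9,
                                    lnL_reduced=5016.8, k_reduced=8,
                                    n_data=2**15)
        print(f"approximate ln Bayes factor (with vs. without fdot): {ln_b:.2f}")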

    Latent class analysis variable selection

    We propose a method for selecting variables in latent class analysis, which is the most common model-based clustering method for discrete data. The method assesses a variable's usefulness for clustering by comparing two models, given the clustering variables already selected. In one model the variable contributes information about cluster allocation beyond that contained in the already selected variables, and in the other model it does not. A headlong search algorithm is used to explore the model space and select clustering variables. In simulated datasets we found that the method selected the correct clustering variables, and also led to improvements in classification performance and in accuracy of the choice of the number of classes. In two real datasets, our method discovered the same group structure with fewer variables. In a dataset from the International HapMap Project consisting of 639 single nucleotide polymorphisms (SNPs) from 210 members of different groups, our method discovered the same group structure with a much smaller number of SNPs.
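
    The per-variable comparison of the two models can be organised as a greedy (headlong) loop over candidates, as in the sketch below. The fit_lca and fit_independent helpers are hypothetical stand-ins for an actual latent class fit (e.g. via EM) returning a BIC; the paper's exact model comparison is not reproduced here.

        def headlong_select(data, candidates, selected, n_classes, fit_lca, fit_independent):
            """One greedy pass: add a candidate variable if the model in which it
            carries clustering information (beyond the selected set) has a better
            (lower) BIC than the model in which it does not."""
            for var in list(candidates):
                # Model 1: var is a clustering variable alongside those already selected.
                bic_clustering = fit_lca(data, selected + [var], n_classes)
                # Model 2: var is unrelated to the clustering defined by the selected set.
                bic_not_clustering = fit_lca(data, selected, n_classes) + fit_independent(data, var)
                if bic_clustering < bic_not_clustering:
                    selected.append(var)
                    candidates.remove(var)
            return selected, candidates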

    The Mass Distribution of Stellar-Mass Black Holes

    We perform a Bayesian analysis of the mass distribution of stellar-mass black holes using the observed masses of 15 low-mass X-ray binary systems undergoing Roche lobe overflow and five high-mass, wind-fed X-ray binary systems. Using Markov Chain Monte Carlo calculations, we model the mass distribution both parametrically (as a power law, exponential, Gaussian, combination of two Gaussians, or log-normal distribution) and non-parametrically (as histograms with varying numbers of bins). We provide confidence bounds on the shape of the mass distribution in the context of each model and compare the models with each other by calculating their relative Bayesian evidence as supported by the measurements, taking into account the number of degrees of freedom of each model. The mass distribution of the low-mass systems is best fit by a power law, while the distribution of the combined sample is best fit by the exponential model. We examine the existence of a "gap" between the most massive neutron stars and the least massive black holes by considering the value, M_1%, of the 1% quantile from each black hole mass distribution as the lower bound of black hole masses. The best model (the power law) fitted to the low-mass systems has a distribution of lower bounds with M_1% > 4.3 Msun with 90% confidence, while the best model (the exponential) fitted to all 20 systems has M_1% > 4.5 Msun with 90% confidence. We conclude that our sample of black hole masses provides strong evidence of a gap between the maximum neutron star mass and the lower bound on black hole masses. Our results on the low-mass sample are in qualitative agreement with those of Ozel et al. (2010).
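
    As a small illustration of how such a lower bound is read off a parametric fit, the sketch below evaluates the 1% quantile of a truncated power-law mass distribution by inverting its CDF; the index and mass range are illustrative placeholders, not the paper's posterior values.

        import numpy as np

        def powerlaw_quantile(q, m_min, m_max, alpha):
            """Inverse CDF of p(m) proportional to m**alpha on [m_min, m_max] (alpha != -1)."""
            a1 = alpha + 1.0
            norm = m_max**a1 - m_min**a1
            return (m_min**a1 + q * norm) ** (1.0 / a1)

        # Illustrative parameters only: a decaying power law between 5 and 20 Msun.
        m_1_percent = powerlaw_quantile(0.01, m_min=5.0, m_max=20.0, alpha=-2.0)
        print(f"M_1% = {m_1_percent:.2f} Msun")   # close to m_min for a decaying power law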

    A Bayesian Approach to the Detection Problem in Gravitational Wave Astronomy

    The analysis of data from gravitational wave detectors can be divided into three phases: search, characterization, and evaluation. The evaluation of the detection - determining whether a candidate event is astrophysical in origin or some artifact created by instrument noise - is a crucial step in the analysis. The ongoing analyses of data from ground-based detectors employ a frequentist approach to the detection problem. A detection statistic is chosen, for which background levels and detection efficiencies are estimated from Monte Carlo studies. This approach frames the detection problem in terms of an infinite collection of trials, with the actual measurement corresponding to some realization of this hypothetical set. Here we explore an alternative, Bayesian approach to the detection problem that considers prior information and the actual data in hand. Our particular focus is on the computational techniques used to implement the Bayesian analysis. We find that the Parallel Tempered Markov Chain Monte Carlo (PTMCMC) algorithm is able to address all three phases of the analysis in a coherent framework. The signals are found by locating the posterior modes, the model parameters are characterized by mapping out the joint posterior distribution, and finally, the model evidence is computed by thermodynamic integration. As a demonstration, we consider the detection problem of selecting between models describing the data as instrument noise, or instrument noise plus the signal from a single compact galactic binary. The evidence ratios, or Bayes factors, computed by the PTMCMC algorithm are found to be in close agreement with those computed using a Reversible Jump Markov Chain Monte Carlo algorithm.
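
    The thermodynamic-integration step has a compact numerical form: ln Z = ∫₀¹ ⟨ln L⟩_β dβ, where ⟨ln L⟩_β is the mean log-likelihood collected at each inverse temperature of the parallel-tempering ladder. The sketch below applies the trapezoid rule to a toy ladder; the β values and averages are placeholders, not outputs of the paper's analysis.

        import numpy as np

        def log_evidence_ti(betas, mean_log_likelihoods):
            """Thermodynamic integration: approximate ln Z = integral of <ln L>_beta
            over beta in [0, 1] with the trapezoid rule on the temperature ladder."""
            betas = np.asarray(betas, dtype=float)
            mean_lnL = np.asarray(mean_log_likelihoods, dtype=float)
            order = np.argsort(betas)
            return np.trapz(mean_lnL[order], betas[order])

        # Toy ladder: <ln L> averaged over the chain at each beta (placeholder values).
        betas = [0.0, 0.1, 0.3, 0.6, 1.0]
        mean_lnL = [0.0, 42.0, 61.0, 70.0, 74.0]
        print(f"ln Z ≈ {log_evidence_ti(betas, mean_lnL):.1f}")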

    Evaluation of Beam Quality Study of Arbitrary Beam Profiles from On-Wafer Vertical Cavity Surface Emitting Lasers

    Vertical cavity surface emitting lasers (VCSELs) have found mainstream use in data centers and short-haul optical fiber communications. Along with the increase in the capacity of such systems comes an increase in the demand for greater power efficiency. System evaluation now includes an assessment of the energy required for each bit of data, a metric referred to as ‘joules per bit’. One source of loss for VCSELs is coupling loss, which is due to a mismatch between the mode profiles of the VCSELs and the optical fiber into which the VCSEL light is coupled. One way to reduce this loss is to develop single-mode VCSEL devices that are modally matched to optical fiber. Efficient development of these devices requires a technique for rapidly evaluating beam quality. This study investigates the use of a vertically mounted commercial beam profiling system and hardware interface software to quickly evaluate the beam quality of arbitrary beam profiles from on-wafer mounted VCSEL devices. This system captures the beam profile emitted from a VCSEL device at fixed locations along the vertical axis. Each image is evaluated within software along a predetermined axis, and the beam quality, or M2, is calculated according to international standards. This system is quantitatively compared against a commercial software package designed for determining beam quality across a fixed axis.
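
    The standard M2 computation reduces to fitting the squared second-moment beam diameter to a parabola in the propagation coordinate (ISO 11146-style: d(z)² = a + b·z + c·z², then M² = (π/(8λ))·√(4ac − b²)). The sketch below implements that fit and verifies it on synthetic data with a known M²; the waist, offset, and wavelength are illustrative values, not measurements from this study.

        import numpy as np

        def m_squared(z, d, wavelength):
            """Fit d(z)^2 = a + b*z + c*z^2 (second-moment diameters along z),
            then return M^2 = (pi / (8*wavelength)) * sqrt(4*a*c - b**2)."""
            c, b, a = np.polyfit(z, np.asarray(d) ** 2, 2)   # highest power first
            return (np.pi / (8.0 * wavelength)) * np.sqrt(4.0 * a * c - b * b)

        # Synthetic check: generate diameters for a beam with a known M^2, then recover it.
        wavelength = 850e-9                     # 850 nm, typical for datacom VCSELs
        w0, z0, m2_true = 10e-6, 3e-3, 2.0      # illustrative waist radius and waist position
        z = np.linspace(0.0, 6e-3, 13)
        w = w0 * np.sqrt(1.0 + (m2_true * wavelength * (z - z0) / (np.pi * w0**2)) ** 2)
        print(f"recovered M^2 ≈ {m_squared(z, 2.0 * w, wavelength):.2f}")   # ~2.00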

    Extreme and rapid bursts of functional adaptations shape bite force in amniotes

    Adaptation is the fundamental driver of functional and biomechanical evolution. Accordingly, the states of biomechanical traits (absolute or relative trait values) have long been used as proxies for adaptations in response to direct selection. However, ignoring evolutionary history, in particular ancestry, the passage of time and the rate of evolution, can be misleading. Here, we apply a recently developed phylogenetic statistical approach using significant rate shifts to detect instances of exceptional rates of adaptive change in bite force in a large group of terrestrial vertebrates, the amniotes. Our results show that bite force in amniotes evolved through multiple bursts of exceptional rates of adaptive change, whereby whole groups, including Darwin's finches, maniraptoran dinosaurs (the group of dinosaurs that includes birds), anthropoids and hominins (fossil and modern humans), experienced significant rate increases compared to the background rate. However, in most parts of the amniote tree of life we find no exceptional rate increases, indicating that coevolution with body size was primarily responsible for the patterns observed in bite force. Our approach represents a template for future studies in functional morphology and biomechanics, where exceptional rates of adaptive change can be quantified and potentially linked to specific ecological factors underpinning major evolutionary radiations.
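
    The paper's method is a phylogenetic variable-rates analysis; as a rough, simplified illustration of the underlying question (does a focal clade evolve faster than the background?), the sketch below compares a one-rate versus a two-rate Brownian-motion model on standardized independent contrasts using a likelihood-ratio test. The contrasts are assumed to be precomputed elsewhere, and this is not the approach used in the study itself.

        import numpy as np
        from scipy import stats

        def rate_shift_test(contrasts_clade, contrasts_background):
            """Compare a single Brownian-motion rate for all contrasts against
            separate rates for the focal clade and the background.
            Returns (rate ratio clade/background, likelihood-ratio-test p-value)."""
            def lnL(x, sigma2):
                return np.sum(stats.norm.logpdf(x, scale=np.sqrt(sigma2)))
            x1 = np.asarray(contrasts_clade, dtype=float)
            x2 = np.asarray(contrasts_background, dtype=float)
            both = np.concatenate([x1, x2])
            lnL_one = lnL(both, np.mean(both ** 2))                          # shared rate
            lnL_two = lnL(x1, np.mean(x1 ** 2)) + lnL(x2, np.mean(x2 ** 2))  # separate rates
            p_value = stats.chi2.sf(2.0 * (lnL_two - lnL_one), df=1)
            return np.mean(x1 ** 2) / np.mean(x2 ** 2), p_value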